RattusRattus, Isy and smcv have all just left after a very long day. Steve is finishing up the final stages. The mayhem has quietened, the network cables are coiled, and pretty much everything is tidied away. A new experience for two of us - I just hope it hasn't put them off too much. The IRC channels are quiet and we can put this one to bed after a good day's work well done.
Culture
Just before I start, I would like to point out that this post may well be NSFW. Then again, what is SFW (Safe For Work) and what is NSFW depends so much on culture and our perception of it, wherever we are or wherever we were born. But still, to be on the safe side, I have marked it as NSFW. Now, there have been a few statements and ideas that gave me pause. This will be a somewhat chaotic blog post, as I am in such a phase today.
For example, while I do not know which culture or country this comes from, somebody shared that in some cultures one can say "May your poop be easy" with a straight face. I dunno which culture this is, but if somebody said that to me I would just die from laughing, or maybe poop there itself. I can understand it from a constipated person, but a whole culture? Until and unless their DNA is really screwed, I don't think so, but then what do I know? I do know that we shit when we have extreme reactions of either joy or fear. And IIRC, this comes from the mammalian response to dangerous situations, which we kept as humans evolved. I would really be interested to know which culture that is. I did come to know that the Japanese wish that you may not experience hard work, or something to that effect, while ironically they themselves are going extinct due to hard work and not enough relaxation; toxic workplaces are common in Japan, according to social scientists and population experts.
Another term that I couldn't figure out is "The Florida Man strikes again", which is usually used when somebody does something stupid or weird. While it is exclusively used in the American context, I am curious to know how it came about. Why does Florida have such people, or is it an exaggeration? I have also heard the term "What happens in Vegas, stays in Vegas". I think Vegas is also called Sin City, although why just Vegas is beyond me.
Omron HEM-8712 blood pressure machine
I felt so stupid. I found another e-commerce site, called Wellness Forever. They had the blood pressure machine I wanted, an Omron HEM-8712. I bought it online and they delivered it within half an hour. Amazon took six days and in the end didn't deliver it at all.
I tried taking measurements with it yesterday. I have yet to figure out what it all means, but I did get readings of 109 SYS, 88 DIA, and a pulse of 72. As far as the pulse is concerned, I guess that is normal; the others I just don't know. If only I had known this a couple of months ago. I was able to register the product as well as download and use the Omron Connect app. For roughly INR 2.5k you have a sort of health monitoring system. It isn't a Star Trek tricorder in any shape or form, but it will have to do while the tricorder gets invented. And while we are on the subject, let's not forget Elizabeth Holmes and the scam called Theranos. It really is something to see how Elizabeth Holmes modeled so much of herself on Steve Jobs, mimicking how he left college/education halfway. A part of me is sad that Theranos is not real. Joe Scott shared some perspectives on the same just a few days ago. The idea in itself is pretty seductive, to say the least, and that is the reason the scam went on for more than a decade, and it perhaps would have gone on longer if some people hadn't gotten the truth out.
I do see, potentially, something like that coming as A.I. takes a bigger role in automating testing. Half a decade to a decade from now, who knows if there is an algorithm that is able to do what is needed? If such a product were to come to the marketplace at a decent price, it would revolutionize medicine, especially in countries like India, South Africa, and all sorts of remote places, and especially with all sorts of off-grid technologies coming and maturing in the marketplace. Before I forget, there is a game called Cell on Android that walks you through the evolution of life on Earth. It also lends credence to the idea that life has arisen on Earth six times and been destroyed multiple times by asteroids. It is in the idle game format, so you can see the humble beginnings from the primordial soup through various kinds of cells and bacteria to, finally, a mammal. This is where I am, with a long way to go.
Indian Bureaucracy
One of the few things that the Britishers gave to India is the bureaucracy, and the bureaucracy tests us in myriad ways. It will be a full two months on 5th September, and I haven't yet got a death certificate. And I need that for a sundry number of things. The same goes for a disability certificate. What is and was interesting is my trip to the local big hospital, called Sassoon Hospital. My mum had shared incidents that occurred in the 1950s when she and the family had come to Pune. According to her, while Sassoon was the place to be, it was big and chaotic and you never knew where you were going. That was in the 1950s; I had the same experience in 2022. The adage "the more things change, the more they remain the same" seems to hold true for Sassoon Hospital.
Btw, for those of you who think the Devil exists: he is totally a fallacy. There is a popular myth that the devil comes to deal with you when somebody close to you passes; I was waiting desperately for him when mum passed. Any deal that he/she/they would have offered me I would have gladly taken, but all my waiting was for nothing. While I believe evil exists, it is manifested by humans and nobody else. The whole idea and story of the devil is just to control young children, and nothing beyond that.
Debconf 2023, friends, JPEGOptim, and EVs
Quite a number of friends had gone to Albania this year, as India won the right to host Debconf for the year 2023. While I did lurk on the Debconf orga IRC channel, I'm not sure how helpful I would be currently. One piece of news that warmed my heart is that some people would be coming to India to check the site well in advance and make sure things go smoothly. Nothing like having more eyes (in this case bodies) to throw at a problem, and hopefully it will be sorted. While I have not been working for the last couple of years, one of the things that I had to do, and have been doing, is move a lot of stuff online. This is in part due to the Government's own intention of having everything on the cloud. One of the things I have probably shared more than enough times is that the storage most of these sites allow is stuck in the 1990s. I tried jpegoptim and while it works, it degrades the quality of the image quite a bit. The whole thing seems backward, especially as newer smartphones are capturing more data per picture (megapixel resolution), case in point the Samsung Galaxy A04 that is being introduced. But this is not only about newer phones; even my earlier phone, a Samsung J-5/500 which I bought in 2016, took images at 5 MB. So it is not a new issue but a continuous one. And almost all Govt. sites have the upper limit fixed at 1 MB. Nor is this limited to Govt. sites alone; most sites in India are somewhat frozen in the 1990s. And it isn't as if resources for designing web pages using HTML5, CSS3, Javascript, Python, or Java aren't available. If worse comes to worst, one can even use AMP to make their point. But this is only if they want to do stuff. I would be sharing a few photos with commentary; there are still places where I can put photos apart from social media.
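For what it's worth, jpegoptim can target a file size directly rather than just a quality level, which suits these 1 MB upload caps better than a blind quality drop; a sketch (the filename here is made up):

```shell
# Lossy re-encode capped at quality 80, then iterate towards ~1 MB.
# (photo.jpg is an illustrative filename.)
jpegoptim --max=80 photo.jpg
jpegoptim --size=1000k photo.jpg
```

The --size pass keeps reducing quality until the file fits, which usually looks better than picking a single low quality value by hand.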
Friends
Last week, on Saturday, suddenly all the friends decided to show up. I have no clue one way or the other why, but am glad they showed up.
I will have to be a bit rapid about what I am sharing above, so here goes nothing.
1. The first picture shows Mahendra, Akshat, me, and Sagar Sukhose (Mangesh's friend). The picture was taken by Mangesh Diwate. We talked quite a bit about various things that could be done in Debian. One of the things I shared was bringing more stuff from BSD to Debian; I am sure there's still quite a lot of security software that would be advantageous to have in Debian. The best person to talk to or ask for guidance about this would undoubtedly be Paul Wise, or as he is affectionately called, Pabs. He is one of the shy ones and yet knows so much about how things work. The one and only time I met him was in 2016. The other thing we talked about was porting Debian to one of the phones. This has been done in the past, by a Puneite some 4-5 years back. While I don't recollect the gentleman's name, I remember that the porting was done on a Motorola phone, as that was the easiest to do. He had tried some other mobile but that didn't work. Making Debian available on a phone is hard work. Just to get an idea, I went to the xda-developers forum and found out that while the M51 has been added, my specific phone model is not there: a "Samsung Galaxy M52G Android (samsung; SM-M526B; lahaina; arm64-v8a) v12". You look at the chat and you understand how difficult the process might be. One of the other ideas that Akshat pitched was Debian Astro; this is something that is close to the heart of many, including me. I also proposed having some kind of web app or something where we can find and share details of the various astronomy and related projects done by various agencies. While there is a NASA app, nothing comes close to JSR, and that site just shares facts, no speculation. There are so many projects taken up or being done by the EU, JAXA, ISRO, and even Middle East countries, but other than the people who follow some of the developments, we hear almost nothing. Even the Chinese have made long strides, but most people know nothing about them.
And it's sad that those developments are not known, shared, or even speculated about as much as, say, NASA's or SpaceX's. How we go about changing that, and how we get people to contribute or ask questions around it, would be interesting.
2. The second picture was something that was shared by Akshat. Akshat was sharing how in Albania people are moving around on these "electric scooters"; I dunno if that is the right word for them or not. I had heard from a couple of friends who had gone to Vietnam a few years ago how most people there had modified their scooters, with snaking lines of electric wires charging them. I have no clue whether those were closer to a Vespa or something like the above. In India, the Govt. is in partnership with the oil, gas, and coal mafia, just as it was in Australia (the new Govt. there is making changes). With the humongous profits that the oil sector provides the petro-states and others, corruption is bound to happen. We talk, and that's the extent of things.
3. The third picture is from a nearby area called F.C. Road or Fergusson College Road. The area has come up quite sharply (commercially) in the last few years. Apparently, Mr. Kushal is making a real-life replica of Wall Street which would be given to commercial tenants. Right now the real estate market is tight in India, we will know how things pan out in the next few years.
4. Number four is an image of a Ganesh idol near my house. There is a 10-day festival of the elephant god that people celebrate every year. For the last couple of years, because of the pandemic, people were unable to celebrate the festival as it is meant to be celebrated. This time some people are going overboard while others are cautious, and rightfully so.
5. Last but not least, one of the things people do at this celebration is wear new clothes, so I shared a photo of a gentleman who had bought and was wearing new clothes. While most countries around the world are similar, Latin America is very similar to India in many ways, especially in religious activities; perhaps Gunnar can share more. The elephant god is known for his penchant for sweets, and that can be seen from his rounded stomach; that is also how he is celebrated. He is known to make problems disappear, or that is supposed to be his thing. We do have something like 4 billion gods, so each one has to be given some work or quality to justify the same.
And here we are: second day of the barbeque in Cambridge. Lots of food - as always - some alcohol, some soft drinks, coffee. Lots of good friends, and banter and good-natured argument. For a couple of folk, it's their first time here - but most people have known each other for years. Lots of reminiscing, some crochet from two of us. Multiple technical discussions weaving and overlapping. Not just meat and vegetarian options for food: a fresh loaf, gingerbread of various sorts, fresh Belgian-style waffles. I'm in the front room: four of us silently on laptops, one on a phone. Sounds of a loud game of Mao from the garden - all very normal for this time of year. Thanks to Jo and Steve, to all the cooks and folk sorting things out. One more night and I'll have done my first full BBQ here. Diet and slimming - what diet?
When I first moved from being a technical consultant to a manager of other consultants, I took a 5-day course, "Managing Technical Teams": a bootstrap for managing people within organisations, but with a particular focus on technical people. We do have some particular quirks, after all.
Two elements of that course keep coming to mind when doing Debian work, and they both relate to how teams fit together and get stuff done.
Tuckman's four-stage model
In the mid-1960s Bruce W. Tuckman developed a four-stage descriptive model of the stages a project team goes through in its lifetime. They are:
Forming: the team comes together and its members are typically motivated and excited, but they often also feel anxiety or uncertainty about how the team will operate and their place within it.
Storming: initial enthusiasm can give way to frustration or disagreement about goals, roles, expectations and responsibilities. Team members are establishing trust, power and status. This is the most critical stage.
Norming: team members take responsibility and share a common goal. They tolerate the whims and fancies of others, sometimes at the expense of suppressing conflict and withholding controversial ideas.
Performing: team members are confident, motivated and knowledgeable. They work towards the team's common goal. The team is high-achieving.
Resolved disagreements and personality clashes result in greater intimacy, and a spirit of co-operation emerges.
Teams need to understand these stages because a team can regress to earlier stages when its composition or goals change. A new member, the departure of an existing member, changes in supervisor or leadership style can all lead a team to regress to the storming stage and fail to perform for a time.
When you see a team member say this, as I observed in an IRC channel recently, you know the team is performing:
"nice teamwork these busy days" (seen on IRC in the channel of a performing team)
Tuckman's model describes a team's performance overall, but how can team members establish what they can contribute, and how can they go about doing so confidently and effectively?
Belbin's Team Roles
"The types of behaviour in which people engage are infinite. But the range of useful behaviours, which make an effective contribution to team performance, is finite. These behaviours are grouped into a set number of related clusters, to which the term 'Team Role' is applied." (Belbin, R. M., Team Roles at Work. Oxford: Butterworth-Heinemann, 2010)
Dr Meredith Belbin's thesis, based on nearly ten years of research during the 1970s and 1980s, is that each team has a number of roles which need to be filled at various times, but these roles are not innate characteristics of the people filling them. People may have attributes which make them more or less suited to each role, and they can consciously take up a role if they recognise its need in the team at a particular time.
Belbin's nine team roles are:
Resource investigator (people): outgoing; enthusiastic; has lots of contacts and knows someone who might know someone who knows how to solve a problem. Associated weaknesses: over-optimism; enthusiasm wanes quickly.
Co-ordinator (people): mature; confident; identifies talent; clarifies goals and delegates effectively. Associated weaknesses: may be seen as manipulative; offloads own share of work.
Shaper (action): challenging; dynamic; has drive. Describes what they want and when they want it. Associated weaknesses: prone to provocation; offends others' feelings.
Plant (thinking): creative, imaginative and free-thinking; generates ideas and solves difficult problems. Associated weaknesses: ignores incidentals; too preoccupied to communicate effectively.
Monitor/evaluator (thinking): sees all options and judges accurately. Best given data and options and asked which the team should choose. Associated weaknesses: lacks drive; can be overly critical.
Teamworker (people): takes care of things behind the scenes; spots a problem and deals with it quietly without fuss. Averts friction. Associated weaknesses: indecisive; avoids confrontation.
Implementer (action): turns ideas into actions and organises work. Associated weaknesses: somewhat inflexible; slow to respond to new possibilities.
Completer finisher (action): searches out errors; polishes and perfects. Despite the name, may never actually consider something "finished". Associated weaknesses: inclined to worry; reluctant to delegate.
Specialist (thinking): knows or can acquire a wealth of knowledge on a subject. Associated weaknesses: narrow focus; overwhelms others with depth of knowledge.
(adapted from https://www.belbin.com/media/3471/belbin-team-role-descriptions-2022.pdf)
A well-balanced team, Belbin asserts, isn't composed of nine individuals who each fit into one of these roles permanently. Rather, it has a number of people who are comfortable wearing some of these hats as the need arises. It's even useful to use the team roles as language: for example, someone playing a shaper might say "the way we've always done this is holding us back", to which a co-ordinator could respond "Steve, Joanna: put on your Plant hats and find some new ideas. Talk to Susan and see if she knows someone who's tackled this before. Present the options to Nigel and he'll help evaluate which ones might work for us."
Teams in Debian
There are all sorts of teams in Debian: those which are formally brought into operation by the DPL or the constitution; package maintenance teams; public relations teams; non-technical content teams; special interest teams; and a whole heap of others. Teams can be formal and informal, fleeting or long-lived, two people working together or dozens.
But they all have the Tuckman stages of their development in common, along with the Belbin team roles they need filled in order to flourish. At some stage in their existence, they will all experience new or departing team members and a period of re-forming, storming and norming, perhaps fleetingly, perhaps not. And at some stage they will all need someone to step into a team role, play the part and get the team one step further towards its goals.
Footnote
Belbin Associates, the company Meredith Belbin established to promote and continue his work, offers a personalised report with guidance about which roles team members show the strongest preferences for, and how to make best use of them in various settings. The questionnaires are quick to complete and can also take into account "observers", i.e. how others see a team member. All my technical staff go through this process blind shortly after they start, so as not to bias their input, and then we discuss the roles and their report in detail in a one-to-one.
There are some teams in Debian for which this process and discussion as a group activity could be invaluable. I have no particular affiliation with Belbin Associates other than having used the reports and the language of team roles for a number of years. If there's sufficient interest for a BoF session at the next DebConf, I could probably be persuaded to lead it.
Photo by Josh Calabrese on Unsplash
Recently I've been working with simple/trivial scripting languages, and I guess I finally reached a point where I thought "Lisp? Why not". One of the reasons for recent experimentation was thinking about the kind of minimalism that makes implementing a language less work - being able to actually use the language to write itself.
FORTH is my recurring example, because implementing it mostly means writing a virtual machine which consists of memory ("cells") along with a pair of stacks, and some primitives for operating upon them. Once you have that groundwork in place you can layer the higher-level constructs (such as "for", "if", etc).
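As a hedged sketch of that groundwork (in Python rather than a real FORTH implementation, with an illustrative choice of words), the whole machine is little more than a data stack plus a dictionary of primitives:

```python
# A minimal sketch of the FORTH groundwork: a data stack and a word table.
# (The word names and semantics are illustrative, not any particular FORTH.)
stack = []
words = {
    "+":    lambda: stack.append(stack.pop() + stack.pop()),
    "dup":  lambda: stack.append(stack[-1]),
    "drop": lambda: stack.pop(),
}

def run(source):
    for token in source.split():
        if token in words:
            words[token]()            # execute a primitive word
        else:
            stack.append(int(token))  # anything else is a number literal

run("2 3 + dup +")  # (2 + 3) doubled: leaves 10 on the stack
```

Once an interpreter like this exists, control-flow words and a compiler for new word definitions can be layered on top, largely in the language itself.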
Lisp allows a similar approach, albeit with slightly fewer low-level details required, and far less tortuous thinking. Lisp always feels higher-level to me anyway, given the explicit data-types ("list", "string", "number", etc).
Here's something that works in my toy lisp:
;; Define a function, fact, to calculate factorials (recursively).
(define fact (lambda (n)
(if (<= n 1)
1
(* n (fact (- n 1))))))
;; Invoke the factorial function, using apply
(apply (list 1 2 3 4 5 6 7 8 9 10)
(lambda (x)
(print "%s! => %s" x (fact x))))
The core language doesn't have helpful functions to filter lists, or build up lists by applying a specified function to each member of a list, but adding them is trivial using the standard car, cdr, and simple recursion. That means you end up writing lots of small functions like this:
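A sketch of what such helpers can look like, assuming the toy lisp also has cons and a nil? test alongside car and cdr (those primitive names are assumptions):

```lisp
;; A sketch, assuming cons/car/cdr/nil? primitives exist in the toy lisp.
(define map (lambda (f lst)
  (if (nil? lst)
      lst
      (cons (f (car lst)) (map f (cdr lst))))))

(define filter (lambda (pred lst)
  (if (nil? lst)
      lst
      (if (pred (car lst))
          (cons (car lst) (filter pred (cdr lst)))
          (filter pred (cdr lst))))))
```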
This all feels very sexy and simple, because the implementations of map, apply, filter are all written using the lisp - and they're easy to write.
Lisp takes things further than some other "basic" languages because of the (infamous) support for macros. But even without them, writing new useful functions is pretty simple. Where do things struggle? I guess I don't actually have a history of using lisp to actually solve problems, although it's great for configuring my editor.
Anyway I guess the journey continues. Having looked at the obvious "minimal core" languages I need to go further afield:
We're flagging a bit now, I think, but we're close to the end. The standard Debian images caused no problems: Sledge and I are just finishing up the last few live images to test now. Thanks, as ever, to the crew: RattusRattus and Isy, and Sledge struggling through feeling awful. No debian-edu testing today, unfortunately, but that almost never breaks anyway. Everyone's getting geared up for Kosovo - you'll see the other three there with any luck - and you'd catch all of us at the BBQ in Cambridge. It's going to be a hugely busy month and a bit for Steve and the others. :)
So my previous post introduced a trivial interpreter for a TCL-like language.
In the past week or two I've cleaned it up, fixed a bunch of bugs, and added 100% test-coverage. I'm actually pretty happy with it now.
One of the reasons for starting this toy project was to experiment with how easy it is to extend the language using itself.
Some things are simple, for example replacing this:
puts "3 x 4 = [expr 3 * 4]"
With this:
puts "3 x 4 = [* 3 4]"
Just means defining a function (proc) named *. Which we can do like so:
proc * {a b} {
    expr $a * $b
}
(Of course we don't have lists, or variadic arguments, so this is still a bit of a toy example.)
Doing more than that is hard, though, without support for more primitives written in the parent language than I've implemented. The obvious thing I'm missing is a native implementation of upvar, the TCL primitive that allows you to affect/update variables in higher scopes. Without that you can't write things as nicely as you would like, and have to fall back on horrid hacks, or be unable to do some things at all.
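For comparison, this is what the real primitive buys you in full TCL: a proc can alias a variable from its caller's scope and modify it in place (the proc name here is made up):

```tcl
# Standard TCL, not the toy: upvar aliases a caller's variable.
proc double {varname} {
    upvar 1 $varname v   ;# v is now an alias for the caller's variable
    set v [expr {$v * 2}]
}

set x 21
double x
puts $x   ;# prints 42
```

Without such a primitive, the toy language has to fake this with eval tricks, as the examples below show.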
# define a procedure to run a body N times
proc repeat {n body} {
    set res ""
    while {> $n 0} {
        decr n
        set res [$body]
    }
    $res
}

# test it out
set foo 12
repeat 5 { incr foo }

# foo is now 17 (i.e. 12 + 5)
A similar story implementing the loop word, which should allow you to set the contents of a variable and run a body a number of times:
proc loop {var min max bdy} {
    // result
    set res ""

    // set the variable. Horrid.
    // We miss upvar here.
    eval "set $var [set min]"

    // Run the test
    while {<= [set "$$var"] $max} {
        set res [$bdy]

        // This is a bit horrid.
        // We miss upvar here, and not for the first time.
        eval incr "$var"
    }

    // return the last result
    $res
}

loop cur 0 10 { puts "current iteration $cur ($min->$max)" }
# output is:
# => current iteration 0 (0->10)
# => current iteration 1 (0->10)
# ...
That said I did have fun writing some simple test-cases, and implementing assert, assert_equal, etc.
In conclusion I think the number of required primitives needed to implement your own control-flow, and run-time behaviour, is a bit higher than I'd like. Writing switch, repeat, while, and similar primitives inside TCL is harder than creating those same things in FORTH, for example.
LWN reminds us that Git still uses SHA-1 by default.
Commit or tag signing is not a mitigation, and to understand why you need to
know a little about Git's internal structure.
Git internally looks rather like a content-addressable filesystem, with four
object types: tags, commits, trees and blobs.
Content-addressable means changing the content of an object changes the way
you address or reference it, and this is achieved using a cryptographic hash
function. Here is an illustration of the internal structure of an example
repository I created, containing two files (./foo.txt and ./bar/bar.txt)
committed separately, and then tagged:
You can see how trees represent directories, blobs represent files, and so
on. Git can avoid internal duplication of files or directories which remain
identical. The hash function allows very efficient lookup of each object
within Git's on-disk storage.
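You can poke at these objects yourself with git cat-file; a quick sketch in a throwaway repository (the paths and commit message are illustrative):

```shell
# A throwaway repository to inspect Git's object graph (paths are illustrative).
cd "$(mktemp -d)"
git init -q
mkdir bar
echo hello > foo.txt
echo world > bar/bar.txt
git add .
git -c user.name=demo -c user.email=demo@example.com commit -q -m 'add files'
git cat-file -p HEAD           # the commit object: tree hash, author, message
git cat-file -p 'HEAD^{tree}'  # the tree: the foo.txt blob and the bar sub-tree
```

The printed commit object contains only a tree hash, parents and metadata, which is exactly why a signature over it does not directly cover the file contents.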
Tag and commit signatures do not directly sign the files in the repository;
that is, the input to the signature function is the content of the tag/commit
object, rather than the files themselves. This is analogous to the way that
GPG signatures actually sign a cryptographic hash of your email, and there
was a time when this too defaulted to SHA-1. An attacker who can break that
hash function can bypass the guarantees of the signature function.
A motivated attacker might be able to replace a blob, commit or tree in a git
repository using a SHA-1 collision. Replacing a blob seems easier to me than a
commit or tree, because there is no requirement that the content of the files
must conform to any particular format.
There is one key technical mitigation to this in Git: the SHA-1DC algorithm,
which aims to detect and prevent known collision attacks. However, I will
have to leave the cryptanalysis of this to the cryptographers!
So, is this in your threat model? Do we need to lobby GitHub for SHA-256
support? Either way, I look forward to the future operational challenge of
migrating the entire world's git repositories across to SHA-256.
Recently I was reading Antirez's piece "TCL the Misunderstood" again, which is a nice defense of the utility and value of the TCL language.
TCL is one of those scripting languages which used to be used a hell of a lot in the past, for scripting routers, creating GUIs, and more. These days it quietly lives on, but doesn't get much love. That said, it's a remarkably simple language to learn and experiment with.
Using TCL always reminds me of FORTH, in the sense that the syntax consists of "words" with "arguments", and everything is a string (well, not really, but almost. Some things are lists too of course).
A simple overview of TCL would probably begin by saying that everything is a command, and that the syntax is very free. There are just a couple of clever rules which are applied consistently to give you a remarkably flexible environment.
To get started we'll set a string value to a variable:
set name "Steve Kemp"
=> "Steve Kemp"
Now you can output that variable:
puts "Hello, my name is $name"
=> "Hello, my name is Steve Kemp"
OK, it looks a little verbose due to the use of set, and puts is less pleasant than print or echo, but it works. It is readable.
Next up? Interpolation. We saw how $name expanded to "Steve Kemp" within the string. That's true more generally, so we can do this:
set print pu
set me ts
$print$me "Hello, World"
=> "Hello, World"
There "$print" and "$me" expanded to "pu" and "ts" respectively. Resulting in:
puts "Hello, World"
That expansion happened before the input was executed, and works as you'd expect. There's another form of expansion too, which involves the [ and ] characters. Anything within the square-brackets is replaced with the contents of evaluating that body. So we can do this:
puts "1 + 1 = [expr 1 + 1]"
=> "1 + 1 = 2"
Perhaps enough detail there, except to say that we can use { and } to enclose things that are NOT expanded, or executed, at parse time. This facility lets us evaluate those blocks later, so you can write a while-loop like so:
set cur 1
set max 10

while {expr $cur <= $max} {
    puts "Loop $cur of $max"
    incr cur
}
Anyway that's enough detail. Much like writing a FORTH interpreter the key to implementing something like this is to provide the bare minimum of primitives, then write the rest of the language in itself.
You can get a usable scripting language with only a small number of the primitives, and then evolve the rest yourself. Antirez also did this, he put together a small TCL interpreter in C named picol:
My code runs the original code from Antirez with only minor changes, and was a fair bit of fun to put together.
Because the syntax is so fluid there's no complicated parsing involved, and the core interpreter was written in only a few hours then improved step by step.
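As a hedged illustration of just how small that core can be (a Python sketch, not Antirez's actual picol code, with made-up helper names and no nesting or error handling), the two expansion rules plus a command table already interpret simple commands:

```python
# A toy sketch of the two TCL expansion rules: $var substitution and
# [command] evaluation, driving a tiny command table.
import re

def interp(text, env, cmds):
    # [ ... ] : run the bracketed command and splice in its result
    text = re.sub(r"\[([^]]+)\]", lambda m: run(m.group(1), env, cmds), text)
    # $name : splice in the variable's value
    text = re.sub(r"\$(\w+)", lambda m: str(env[m.group(1)]), text)
    return text

def run(line, env, cmds):
    words = interp(line, env, cmds).split()
    return cmds[words[0]](env, *words[1:])

# Two primitive commands are enough to see the shape of the language.
# (eval here is a shortcut for the sketch, not something picol does.)
cmds = {
    "set":  lambda env, name, val: (env.__setitem__(name, val), val)[1],
    "expr": lambda env, a, op, b: str(eval(a + op + b)),
}

env = {}
run("set x 4", env, cmds)
print(run("expr $x * 2", env, cmds))   # prints "8"
```

Everything else, procs, conditionals, loops, is then just more entries in the command table, which is why the interpreter stays so small.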
Of course, to make a language more useful you need I/O beyond just writing to the console, and being able to run the list operations would make it much more useful to TCL users. That said, I had fun writing it, it seems to work, and once again I added fuzz-testers to the lexer and parser to satisfy myself that it was at least somewhat robust.
Feedback welcome, but even in quiet isolation it's fun to look back at these "legacy" languages and recognize that their simplicity led to a lot of flexibility.
Recently I've been getting much more interested in the "retro" computers of my youth, partly because I've been writing crazy code in Z80 assembly-language, and partly because I've been preparing to introduce our child to his first computer:
An actual 1982 ZX Spectrum, cassette deck and all.
No internet
No hi-rez graphics
Easily available BASIC
And as a nice bonus the keyboard is wipe-clean!
I've got a few books, books I've hoarded for 30+ years, but I'd love to collect some more. So here's my request:
If you have any books covering either the Z80 processor, or the ZX Spectrum, please consider dropping me an email.
I'd be happy to pay €5-10 each for any book I don't yet own, and I'd also be more than happy to cover the cost of postage to Finland.
I'd be particularly pleased to see anything from Melbourne House, and while low-level is best, the coding books from Usborne (The Mystery Of Silver Mountain, etc.) wouldn't go amiss either.
I suspect most people who have collected and kept these wouldn't want to part with them, but just in case...
Back in April 2021 I introduced a simple text-based adventure game, The Lighthouse of Doom, which I'd written in Z80 assembly language for CP/M systems.
As it was recently the 40th Anniversary of the ZX Spectrum 48k, the first computer I had, and the reason I got into programming in the first place, it crossed my mind that it might be possible to port my game from CP/M to the ZX Spectrum.
To recap, my game is a simple text-based adventure game, which you can complete in fifteen minutes or less, with a bunch of Paw Patrol easter-eggs.
You enter simple commands such as "up", "down", "take rug", etc etc.
You receive text-based replies "You can't see a telephone to use here!".
My code is largely table-based, having structures that cover objects, locations, and similar state-things. Most of the code involves working with those objects, with only a few small platform-specific routines being necessary:
Clearing the screen.
Pausing for "a short while".
Reading a line of input from the user.
Sending a $-terminated string to the console.
etc.
My feeling was that I could replace the use of those CP/M functions with something custom, and I'd have done 99% of the work. Of course the devil is always in the details.
Let's start. To begin with I'm lucky in that I'm using the pasmo assembler which is capable of outputting .TAP files, which can be loaded into ZX Spectrum emulators.
I'm not going to walk through all the code here, because that is available within the project repository. Instead, here's a very brief getting-started guide which demonstrates writing some code on a Linux host and generating a TAP file which can be loaded into your favourite emulator. Since I needed similar routines for the port, I started by working out how to read keyboard input, clear the screen, and output messages, which is what the following sample demonstrates.
First of all you'll need to install the dependencies, specifically the assembler and an emulator to run the thing:
# apt install pasmo spectemu-x11
Now we'll create a simple assembly-language file, to test things out - save the following as hello.z80:
        ; Code starts here
        org 32768

        ; clear the screen
        call cls

        ; output some text
        ld de, instructions                  ; DE points to the text string
        ld bc, instructions_end-instructions ; BC contains the length
        call 8252

        ; wait for a key
        ld hl,0x5c08            ; LASTK
        ld a,255
        ld (hl),a
wkey:
        cp (hl)                 ; wait for the value to change
        jr z, wkey

        ; get the key and save it
        ld a,(HL)
        push af

        ; clear the screen
        call cls

        ; show a second message
        ld de, you_pressed
        ld bc, you_pressed_end-you_pressed
        call 8252

        ;; Output the ASCII character in A
        ld a,2
        call 0x1601
        pop af
        call 0x0010

        ; loop forever. simple demo is simple
endless:
        jr endless

cls:
        ld a,2
        call 0x1601             ; ROM_OPEN_CHANNEL
        call 0x0DAF             ; ROM_CLS
        ret

instructions:
        defb 'Please press a key to continue!'
instructions_end:

you_pressed:
        defb 'You pressed:'
you_pressed_end:

        end 32768
Now you can assemble that into a TAP file like so:
$ pasmo --tapbas hello.z80 hello.tap
The final step is to load it in the emulator:
$ xspect -quick-load -load-immed -tap hello.tap
The reason I specifically chose that emulator was because it allows easy loading of a TAP file, without waiting for the tape to play, and without the use of any menus. (If you can tell me how to make FUSE auto-start like that, I'd love to hear!)
I wrote a small number of "CP/M emulation functions" allowing me to clear the screen, pause, prompt for input, and output text, which will work via the primitives available within the standard ZX Spectrum ROM. Then I reworked the game a little to cope with the different screen resolution (though only minimally, some of the text still breaks lines in unfortunate spots):
The end result is reasonably playable, even if it isn't quite as nice as the CP/M version (largely because of the unfortunate word-wrapping, and smaller console-area). So now my repository contains a .TAP file which can be loaded into your emulator of choice, available from the releases list.
Here's a brief teaser of what you can expect:
Outstanding bugs? Well the line-input is a bit horrid, and unfortunately this was written for CP/M accessed over a terminal - so I'd assumed a "standard" 80x25 resolution, which means that line/word-wrapping is broken in places.
That said it didn't take me too long to make the port, and it was kinda fun.
There has been a
flurry of activity on the Debian mailing lists ever since Steve
McIntyre raised the issue of including non-free firmware as part of
official Debian installation images.
Firstly I should point out that I am in complete agreement with Steve's proposal to include non-free firmware as part of an installation image. Likewise I think that we should have a separate archive section for firmware, because without doing so it will soon become almost impossible to install onto any new hardware. However, as always, the issue is more nuanced than a first glance would suggest.
Let's start by
defining: what is firmware?
Firmware is any
software that runs outside the orchestration of the operating system.
Typically firmware will be executed on a processor(s) separate from
the processor(s) running the OS, but this does not need to be the
case.
As Debian we
are content that our systems can operate using fully free and open
source software and firmware. We can install our OS without needing
any non-free firmware.
This is an illusion!
Each and every PC
platform contains non-free firmware
It may be possible to run free firmware on some graphics controllers, Wi-Fi chipsets, or Ethernet cards, and we can (and perhaps should) choose to spend our money on systems where this is the case. When installing a new system we might still be forced to hold our nose and install with non-free firmware on the peripheral before upgrading it to FLOSS firmware later, if such firmware exists and it is possible to do so. However, after the installation we are running a fully FLOSS system in terms of software and firmware.
We all (almost
without exception) are running proprietary firmware whether we like
it or not.
Even after carefully selecting graphics and network hardware with FLOSS firmware options we still haven't escaped from non-free firmware. Other peripherals contain firmware too: each keyboard, each disk (SSD or spinning rust). Even the USB memory stick that you use to hold the Debian installation image contains a microcontroller, and hence firmware that runs on it.
Much of this
firmware can not even be updated.
Some can be
updated, but is stored in flash ROM where the hardware vendor has
defeated all programming methods (possibly circumvented with a
hardware mod).
Some of it can
be updated but requires external device programmers (and often the
programming connections are a series of test points dotted around
the board and not on a header in order to make programming as
difficult as possible).
Sometimes the
firmware can be updated from within the host operating system (i.e.
Debian)
Sometimes, as
Steve pointed out in his post, the hardware vendor ships enough
firmware on a peripheral to perform basic functions, perhaps
enough to install the OS, but requires additional firmware to enable
specific features (e.g. higher screen resolutions, hardware
accelerated functions etc.)
Finally some
vendors don't even bother with any non-volatile storage beyond a
basic boot loader, and firmware must be loaded before the device can
be used in any mode.
What about the motherboard? If we are lucky we might be able to run a FLOSS implementation of the UEFI subsystem (edk2/tianocore for example); indeed the non-AMD64/i386 platforms based around ARM or MIPS architectures are often the most free when it comes to firmware.
What about the microcode on the processor? Personally I wasn't aware that this was updatable firmware until the Spectre and Meltdown classes of vulnerabilities arose a few years back.
So back to Debian
images including non-free firmware.
This is specifically to address the last two use cases mentioned above, i.e. where firmware needs to be loaded to achieve a minimum functioning of a device. Although it could also include motherboard support, and microcode as well.
As far as I can tell
the proposal exists for several reasons:
#1 Because some freely distributable firmware is required for
more and more devices in order to install Debian, or because whilst
Debian can be installed, a desktop environment cannot be started or
does not fully function
#2 Because frankly it is less work to produce, test and maintain fewer installation images. As someone who performs tests on our images, this clearly gets my vote :-)
and perhaps most important of all..
#3 Because our least experienced users, and new users, will download an official image and give up if things don't "just work"™
Steve's proposal
(option 5) would address these issues and I fully support it.
I would love to see
separate repositories for firmware and firmware-non-free.
Additionally, to accompany firmware-non-free, I would like to have
information on what the firmware actually does. Can I run my
hardware without it? What function(s) are limited without the
firmware? Better yet, is there a FLOSS equivalent that I can load
instead? Is this something that we can present in the Debian installer?
I would love not to require non-free firmware, but if I can't avoid it,
I would love the installer to enable a user to make an informed choice as
to what, if any, firmware is installed.
Should we be
requesting (requiring?) this information for any non-free firmware
image that we carry in the archive?
Finally, let's
consider firmware in the wider, general case, not just the case where
we need to load firmware from within Debian each and every boot.
Personally I am
annoyed whenever a hardware manufacturer has gone out of their way to
prevent firmware updates. Let's face it: software contains bugs, and
we can assume that the software making up a firmware image will as
well.
Critical (security)
vulnerabilities found in firmware, especially if this runs on the
same processor(s) as the OS can impact on the wider system, not just
the device itself. This means that, without updatable firmware,
the hardware itself should be withdrawn from use even though it would
otherwise still function. By preventing firmware updates, vendors are
forcing early obsolescence in the hardware they sell, perhaps good
for their bottom line, but certainly no good for users or the
environment.
Here I can practice
what I preach. As an Electronic Engineer / Systems architect I have
been beating the drum for In System Updatable firmware for ALL
programmable devices in a system, be it a simple peripheral or a
deeply embedded system. I can honestly say that over the last 20
years (yes I have been banging this particular drum for that long) I
have had 100% success in arguing this case commercially. Having
device programmers in R&D departments is one thing, but that is
additional cost for production, and field service. Needing custom
programming headers or even a bed of nails fixture to connect your
target device to a programmer is more trouble than it is worth.
Finally, the ability to update firmware in the field means that you
can launch your product on schedule, make a sale and ship to a
customer even if the first thing that you need to do is download an
update. Offering that to any project manager will make you very
popular indeed.
So what if this
firmware is non-free? As long as the firmware resides in
non-volatile media without needing the OS to interact with it, we as
a project don't need to carry it in our archives. And we as
principled individuals can vote with our feet and wallets by choosing
to purchase devices that have free firmware.
But where that isn't
an option, I'll take updatable but non-free firmware over non-free
firmware that cannot be updated any day of the week.
Sure, the
manufacturer can choose to no longer support the firmware, and it is
shocking how soon this happens. Often in the consumer market, the
manufacturer has withdrawn support for a product before it even
reaches the end user (in which case we should boycott that
manufacturer in future until they either change their ways or go
bust). But again, if firmware can be updated in-system, that
would at least allow the possibility of open firmware to arise.
Indeed the only commercial cases I have seen argued against
updatable firmware have been either for DRM, in which case good,
let's get rid of both, or for RF licence compliance, and even then it
is tenuous, because in this case the manufacturer wants ISP for its own
use right up until a device is shipped out the door, typically
achieved by blowing one-time programmable fuse links.
TL;DR: firmware support in Debian sucks, and we need to change
this. See the "My preference, and rationale" Section below.
In my opinion, the way we deal with (non-free) firmware in Debian is a
mess, and this is hurting many of our users daily. For a long time
we've been pretending that supporting and including (non-free)
firmware on Debian systems is not necessary. We don't want to have
to provide (non-free) firmware to our users, and in an ideal world we
wouldn't need to. However, it's very clearly no longer a sensible path
when trying to support lots of common current hardware.
Background - why has (non-free) firmware become an issue?
Firmware is the low-level software that's designed to make
hardware devices work. Firmware is tightly coupled to the hardware,
exposing its features, providing higher-level functionality and
interfaces for other software to use. For a variety of reasons, it's
typically not Free Software.
For Debian's purposes, we typically separate firmware from
software by considering where the code executes (does it run on a
separate processor? Is it visible to the host OS?) but it can be
difficult to define a single reliable dividing line here. Consider the
Intel/AMD CPU microcode packages, or the U-Boot firmware packages as
examples.
In times past, all necessary firmware would normally be included
directly in devices / expansion cards by their vendors. Over time,
however, it has become more and more attractive (and therefore more
common) for device manufacturers to not include complete firmware
on all devices. Instead, some devices just embed a very simple set
of firmware that allows for upload of a more complete firmware "blob"
into memory. Device drivers are then expected to provide that blob
during device initialisation.
There are a couple of key drivers for this change:
Cost: it's typically cheaper to fit smaller flash memory (or no
flash at all) onto a device. The cost difference may seem small in
many cases, but reducing the bill of materials (BOM) even by a few
cents can make a substantial difference to the economics of a
product. For most vendors, they will have to implement device
drivers anyway and it's not difficult to include firmware in that
driver.
Flexibility: it's much easier to change the behaviour of a device by
simply changing to a different blob. This can potentially cover lots
of different use cases:
separating deadlines for hardware and software in manufacturing
(drivers and firmware can be written and shipped later);
bug fixes and security updates (e.g. CPU microcode changes);
changing configuration of a device for different users or products
(e.g. potentially different firmware to enable different
frequencies on a radio product);
changing fundamental device operation (e.g. switching between RAID
and JBOD functionality on a disk controller).
Due to these reasons, more and more devices in a typical computer now
need firmware to be uploaded at runtime for them to function
correctly. This has grown:
Going back 10 years or so, most computers only needed firmware
uploads to make WiFi hardware work.
A growing number of wired network adapters now demand firmware
uploads. Some will work in a limited way but depend on extra
firmware to allow advanced features like TCP segmentation offload
(TSO). Others will refuse to work at all without a firmware upload.
More and more graphics adapters now also want firmware uploads to
provide any non-basic functions. A simple basic (S)VGA-compatible
framebuffer is not enough for most users these days; modern desktops
expect 3D acceleration, and a lot of current hardware will not
provide that without extra firmware.
Current generations of common Intel-based laptops also need firmware
uploads to make audio work (see the firmware-sof-signed package).
At the beginning of this timeline, a typical Debian user would be able
to use almost all of their computer's hardware without needing any
firmware blobs. It might have been inconvenient to not be able to use
the WiFi, but most laptops had wired ethernet anyway. The WiFi could
always be enabled and configured after installation.
Today, a user with a new laptop from most vendors will struggle to
use it at all with our firmware-free Debian installation media. Modern
laptops normally don't come with wired ethernet now. There won't be
any usable graphics on the laptop's screen. A visually-impaired user
won't get any audio prompts. These experiences are not acceptable, by
any measure. There are new computers still available for purchase
today which don't need firmware to be uploaded, but they are growing
less and less common.
Current state of firmware in Debian
For clarity: obviously not all devices need extra firmware
uploading like this. There are many devices that depend on firmware
for operation, but we never have to think about them in normal
circumstances. The code is not likely to be Free Software, but it's
not something that we in Debian must spend our time on as we're not
distributing that code ourselves. Our problems come when our user
needs extra firmware to make their computer work, and they need/expect
us to provide it.
We have a small set of Free firmware binaries included in Debian main,
and these are included on our installation and live media. This is
great - we all love Free Software and this works.
However, there are many more firmware binaries that are not
Free. If we are legally able to redistribute those binaries, we
package them up and include them in the non-free section of the
archive. As Free Software developers, we don't like providing or
supporting non-free software for our users, but we acknowledge that
it's sometimes a necessary thing for them. This tension is
acknowledged in the Debian Free Software Guidelines.
This tension extends to our installation and live media. As non-free
is officially not considered part of Debian, our official
media cannot include anything from non-free. This has been a
deliberate policy for many years. Instead, we have for some time been
building a limited parallel set of "unofficial non-free" images which
include non-free firmware. These non-free images are produced by the
same software that we use for the official images, and by the same
team.
There are a number of issues here that make developers and users
unhappy:
1. Building, testing and publishing two sets of images takes more
effort.
2. We don't really want to be providing non-free images at all, from a
philosophy point of view. So we mainly promote and advertise
the preferred official free images. That can be a cause of
confusion for users. We do link to the non-free images in various
places, but they're not so easy to find.
3. Using non-free installation media will cause more installations to
use non-free software by default. That's not a great story for us,
and we may end up with more of our users using non-free software
and believing that it's all part of Debian.
4. A number of users and developers complain that we're wasting their
time by publishing official images that are just not useful for a
lot (a majority?) of users.
We should do better than this.
Options
The status quo is a mess, and I believe we can and should do
things differently.
I see several possible options that the images team can choose from
here. However, several of these options could undermine the principles
of Debian. We don't want to make fundamental changes like that
without the clear backing of the wider project. That's why I'm writing
this...
1. Keep the existing setup. It's horrible, but maybe it's the best we
can do? (I hope not!)
2. We could just stop providing the non-free unofficial images
altogether. That's not really a promising route to follow - we'd be
making it even harder for users to install our software. While
ideologically pure, it's not going to advance the cause of Free
Software.
3. We could stop pretending that the non-free images are unofficial,
and maybe move them alongside the normal free images so they're
published together. This would make them easier to find for people
that need them, but is likely to cause users to question why we
still make any images without firmware if they're otherwise
identical.
4. The images team technically could simply include non-free into
the official images, and add firmware packages to the input lists
for those images. However, that would still leave us with problem 3
from above (non-free generally enabled on most installations).
5. We could split out the non-free firmware packages into a new
non-free-firmware component in the archive, and allow a
specific exception only to allow inclusion of those packages on
our official media. We would then generate only one set of official
media, including those non-free firmware packages.
(We've already seen various suggestions in recent years to split
up the non-free component of the archive like this, for example
into non-free-firmware, non-free-doc, non-free-drivers,
etc. Disagreement (bike-shedding?) about the split caused us to not
make any progress on this. I believe this project should be picked
up and completed. We don't have to make a perfect solution here
immediately, just something that works well enough for our needs
today. We can always tweak and improve the setup incrementally if
that's needed.)
These are the most likely possible options, in my opinion. If you have
a better suggestion, please let us know!
I'd like to take this set of options to a GR, and do it soon. I want
to get a clear decision from the wider Debian project as to how to
organise firmware and installation images. If we do end up changing
how we do things, I want a clear mandate from the project to do that.
My preference, and rationale
Mainly, I want to see how the project as a whole feels here - this
is a big issue that we're overdue solving.
What would I choose to do? My personal preference would be to go
with option 5: split the non-free firmware into a special new
component and include that on official media.
Does that make me a sellout? I don't think so. I've been
passionately supporting and developing Free Software for more than
half my life. My philosophy here has not changed. However, this is a
complex and nuanced situation. I firmly believe that sharing software
freedom with our users comes with a responsibility to also make
our software useful. If users can't easily install and use Debian,
that helps nobody.
By splitting things out here, we would enable users to install and use
Debian on their hardware, without promoting/pushing higher-level
non-free software in general. I think that's a reasonable compromise.
This is simply a change to recognise that hardware requirements have
moved on over the years.
Further work
If we do go with the changes in option 5, there are other things we
could do here for better control of and information about non-free
firmware:
Along with adding non-free firmware onto media, when the installer
(or live image) runs, we should make it clear exactly which
firmware packages have been used/installed to support detected
hardware. We could link to docs about each, and maybe also to
projects working on Free re-implementations.
Add an option at boot to explicitly disable the use of the non-free
firmware packages, so that users can choose to avoid them.
Acknowledgements
Thanks to people who reviewed earlier versions of this document and/or
made suggestions for improvement, in particular:
For various obscure reasons, I have a mirror of Debian in one room and the main laptop and so on that I use in another. The mirror is connected to a fast Internet line, with a 1Gb Ethernet cable into the back directly from the router. The laptop and everything else, not so much: everything there is wired locally, but depends on a WiFi link across the property. One end is fast, the other runs like a snail.

Steve suggested I use a different tool to make images directly on the mirror machine: jigit. Slightly less polished than jigdo but, if you're on the same machine, blazingly fast. I just used it to make the Blu-Ray sized .iso and was very pleasantly surprised.

jigit-mkimage -j [jigdo file] -t [template file] -m Debian=[path to mirror of Debian] -o [output filename]

Another nice surprise for me: I have a horrible old Lenovo Ideapad. It's one of the Bay Trail Intel machines with a 32 bit UEFI and a 64 bit processor. I rescued it from the junk heap. Reinstalling it with an image today fixed an issue I had with slow boot and has turned it into an adequate machine for web browsing.

All in all, I've done relatively few tests so far, but it's been a good day, as ever. More later.
TL;DR: procmail is a security liability and has been abandoned
upstream for the last two decades. If you are still using it, you
should probably drop everything and at least remove its SUID
flag. There are plenty of alternatives to chose from, and conversion
is a one-time, acceptable trade-off.
Procmail is unmaintained
procmail is unmaintained. The "Final release", according to
Wikipedia, dates back to September 10, 2001 (3.22). That release
was shipped in Debian since then, all the way back from Debian 3.0
"woody", twenty years ago.
Debian also ships 25 uploads on top of this, with 3.22-21 shipping the
"3.23pre" release that has been rumoured since at least November
2001, according to debian/changelog at least:
procmail (3.22-1) unstable; urgency=low
  * New upstream release, which uses the `standard' format for Maildir
    filenames and retries on name collision. It also contains some
    bug fixes from the 3.23pre snapshot dated 2001-09-13.
  * Removed `sendmail' from the Recommends field, since we already
    have `exim' (the default Debian MTA) and `mail-transport-agent'.
* Removed suidmanager support. Conflicts: suidmanager (<< 0.50).
* Added support for DEB_BUILD_OPTIONS in the source package.
* README.Maildir: Do not use locking on the example recipe,
since it's wrong to do so in this case.
-- Santiago Vila <sanvila@debian.org> Wed, 21 Nov 2001 09:40:20 +0100
All Debian suites from buster onwards ship the 3.22-26 release,
although the maintainer just pushed a 3.22-27 release to fix a seven
year old null pointer dereference, after this article was drafted.
Procmail is also shipped in all major distributions: Fedora
and its derivatives, Debian derivatives, Gentoo, Arch,
FreeBSD, OpenBSD. We all seem to be ignoring this problem.
The upstream website (http://procmail.org/) has been down since
about 2015, according to Debian bug #805864, with no change
since.
In effect, every distribution is currently maintaining its fork of
this dead program.
Note that, after filing a bug to keep Debian from shipping
procmail in a stable release again, I was told that the
Debian maintainer is apparently in contact with the upstream. And,
surprise! they still plan to release that fabled 3.23 release, which
has been now in "pre-release" for all those twenty years.
In fact, it turns out that 3.23 is considered released already, and
that the procmail author actually pushed a 3.24 release, codenamed
"Two decades of fixes". That amounts to 25 commits since 3.23pre,
some of which address serious security issues, but none of which
address fundamental issues with the code base.
Procmail is insecure
By default, procmail is installed SUID root:mail in
Debian. There's no debconf or pre-seed setting that can change
this. There have been two bug reports against the Debian package to make this
configurable (298058, 264011), but both were closed to say
that, basically, you should use dpkg-statoverride to change the
permissions on the binary.
So if anything, you should immediately run this command on any host
that you have procmail installed on:
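The command itself seems to have been lost from this copy of the post; based on the dpkg-statoverride suggestion mentioned above, it would presumably look something like the following sketch (the /usr/bin/procmail path and root:root ownership are assumed Debian defaults):

```shell
# Override procmail's permissions to mode 0755, i.e. drop the SUID bit.
# Path and ownership here are assumptions based on Debian defaults.
dpkg-statoverride --update --add root root 0755 /usr/bin/procmail
```

The override persists across package upgrades, which is the point of using dpkg-statoverride rather than a plain chmod.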
Note that this might break email delivery. It might also not work at
all, thanks to usrmerge. Not sure. Yes, everything is on
fire. This is fine.
In my opinion, even assuming we keep procmail in Debian, that default
should be reversed. It should be up to people installing procmail to
assign it those dangerous permissions, after careful consideration of
the risk involved.
The last maintainer of procmail explicitly advised us (in that null
pointer dereference bug) and other projects (e.g. OpenBSD, in [2])
to stop shipping it, back in 2014. Quote:
Executive summary: delete the procmail port; the code is not safe
and should not be used as a basis for any further work.
I just read some of the code again this morning, after the original
author claimed that procmail was active again. It's still littered
with bizarre macros like:
... from regexp.c, line 66 (yes, that's a custom regex
engine). Or this one:
#define jj (aleps.au.sopc)
It uses insecure functions like strcpy extensively. malloc()
is thrown around gotos like it's 1984 all over again. (To be fair,
it has been feeling like 1984 a lot lately, but that's another matter
entirely.)
That null pointer deref bug? It's fixed upstream now, in this
commit merged a few hours ago, which I presume might be in response
to my request to remove procmail from Debian.
So while that's nice, this is just the tip of the iceberg. I speculate
that one could easily find an exploitable crash in procmail if only by
running it through a fuzzer. But I don't need to speculate: procmail
had, for years, serious security issues that could possibly lead to
root privilege escalation, remotely exploitable if procmail is (as
it's designed to be) exposed to the network.
Maybe I'm overreacting. Maybe the procmail author will go through the
code base and do a proper rewrite. But I don't think that's what is in
the cards right now. What I expect will happen next is that people
will start fuzzing procmail, throw an uncountable number of bug
reports at it which will get fixed in a trickle while never fixing the
underlying, serious design flaws behind procmail.
Procmail has better alternatives
The reason this is so frustrating is that there are plenty of modern
alternatives to procmail which do not suffer from those problems.
Alternatives to procmail(1) itself are typically part of mail
servers. For example, Dovecot has its own LDA which implements
the standard Sieve language (RFC 5228). (Interestingly, Sieve
was published as RFC 3028 in 2001, before procmail was formally
abandoned.)
Courier also has "maildrop" which has its own filtering mechanism,
and there is fdm (2007) which is a fetchmail and procmail
replacement. Update: there's also mailprocessing, which is not
an LDA, but processes an existing folder. It was, however,
specifically designed to replace complex Procmail rules.
But procmail, of course, doesn't just ship procmail; that would just
be too easy. It ships mailstat(1) which we could probably ignore
because it only parses procmail log files. But more importantly, it
also ships:
lockfile(1) - conditional semaphore-file creator
formail(1) - mail (re)formatter
lockfile(1) already has a somewhat acceptable replacement in the form of
flock(1), part of util-linux (which is Essential, so installed on
any normal Debian system). It might not be a direct drop-in
replacement, but it should be close enough.
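As a sketch of that substitution (the lock file path and the guarded command here are purely illustrative), a lockfile(1)-style critical section becomes a single flock(1) invocation:

```shell
# Serialise a critical section with flock(1) from util-linux:
# a second invocation on the same lock file blocks until the
# first releases it. Lock path and guarded command are illustrative.
flock /tmp/deliver.lock -c 'echo "only one copy of me runs at a time"'
```

Note the different shape: lockfile(1) creates a semaphore file that you later remove yourself, whereas flock(1) holds the lock only for the duration of the wrapped command, which is harder to get wrong.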
formail(1) is similar: the courier maildrop package ships
reformail(1) which is, presumably, a rewrite of formail. It's
unclear if it's a drop-in replacement, but it should probably be possible
to port uses of formail to it easily.
Update: the maildrop package ships a SUID root binary (two,
even). So if you want only reformail(1), you might want to disable
that with:
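The exact command did not survive in this copy either; a dpkg-statoverride sketch in the same spirit would be something like the following (the binary paths are assumptions, so check dpkg -L maildrop for the actual SUID binaries on your system):

```shell
# Drop the SUID bit on the maildrop package's SUID root binaries.
# These paths are assumptions; verify them with: dpkg -L maildrop
dpkg-statoverride --update --add root root 0755 /usr/bin/maildrop
dpkg-statoverride --update --add root root 0755 /usr/bin/lockmail.maildrop
```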
It would perhaps be better to have reformail(1) as a separate
package; see bug 1006903 for that discussion.
The real challenge is, of course, migrating those old .procmailrc
recipes to Sieve (basically). I added a few examples in the appendix
below. You might notice the Sieve examples are easier to read, which
is a nice added bonus.
Conclusion
There is really, absolutely, no reason to keep procmail in Debian, nor
should it be used anywhere at this point.
It's a great part of our computing history. May it be kept forever in
our museums and historical archives, but not in Debian, and certainly not
in an actual release.
It's just a bomb waiting to go off. It is irresponsible for
distributions to keep shipping obsolete and insecure software like
this for unsuspecting users.
Note that I am grateful to the author, I really am: I used procmail
for decades and it served me well. But now it's time to move on, not
bring it back from the dead.
Appendix
Previous work
It's really weird to have to write this blog post. Back in 2016, I
rebuilt my mail setup at home and, to
my horror, discovered that procmail had been abandoned for 15 years at
that point, thanks to that LWN article from 2010. I would have
thought that I was the only weirdo still running procmail after all
those years and felt kind of embarrassed to only "now" switch to the
more modern (and, honestly, awesome) Sieve language.
But no. Since then, Debian shipped three major releases (stretch,
buster, and bullseye), all with the same vulnerable procmail
release.
Then, in early 2022, I found that, at work, we actually had procmail
installed everywhere, possibly because userdir-ldap was using
it for lockfile until 2019. I sent a patch to fix that and scrambled
to get rid of procmail everywhere. That took about a day.
But many other sites are now in that situation, possibly not imagining
they have this glaring security hole in their infrastructure.
Procmail to Sieve recipes
I'll collect a few Sieve equivalents to procmail recipes here. If you
have any additions, do contact me.
All Sieve examples below assume you drop the file in ~/.dovecot.sieve.
deliver mail to "plus" extension folder
Say you want to deliver user+foo@example.com to the folder
foo. You might write something like this in procmail:
MAILDIR=$HOME/Maildir/
DEFAULT=$MAILDIR
LOGFILE=$HOME/.procmail.log
VERBOSE=off
EXTENSION=$1 # Need to rename it - ?? does not like $1 nor 1
:0
* EXTENSION ?? [a-zA-Z0-9]+
.$EXTENSION/
That, in sieve language, would be:
require ["variables", "envelope", "fileinto", "subaddress"];
########################################################################
# wildcard +extension
# https://doc.dovecot.org/configuration_manual/sieve/examples/#plus-addressed-mail-filtering
if envelope :matches :detail "to" "*" {
  # Save name in ${name} in all lowercase
  set :lower "name" "${1}";
  fileinto "${name}";
  stop;
}
Subject into folder
This would file all mails with a Subject: line having FreshPorts
in it into the freshports folder, and mails from alternc.org
mailing lists into the alternc folder:
:0
## mailing list freshports
* ^Subject.*FreshPorts.*
.freshports/
:0
## mailing list alternc
* ^List-Post.*mailto:.*@alternc.org.*
.alternc/
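In Sieve, a sketch of the same rules (assuming the same folder names
as the procmail destinations) could be:

```
require ["fileinto"];

# mailing list freshports: file by Subject
if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
# mailing list alternc: file by List-Post header
} elsif header :contains "list-post" "@alternc.org" {
    fileinto "alternc";
}
```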
Automated script
There is a procmail2sieve.pl script floating around, and
mentioned in the dovecot documentation. It didn't work very well
for me: I could use it for small things, but I mostly wrote the sieve
file from scratch.
Progressive migration
Enrico Zini has progressively migrated his procmail setup to Sieve
in a clever way: he hooked procmail inside Sieve so that he could
deliver to the Dovecot LDA and migrate rules one by
one, without having a "flag day".
See this explanatory blog post for the details, which also shows
how to configure Dovecot as an LMTP server with Postfix.
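For reference, a minimal sketch of such a Postfix-to-Dovecot LMTP
hookup (a typical stock configuration, an assumption rather than
Enrico's actual setup) looks like:

```
# /etc/postfix/main.cf -- hand local delivery off to Dovecot over LMTP
mailbox_transport = lmtp:unix:private/dovecot-lmtp

# /etc/dovecot/conf.d/20-lmtp.conf -- enable Sieve filtering at delivery
protocol lmtp {
  mail_plugins = $mail_plugins sieve
}

# /etc/dovecot/conf.d/10-master.conf -- socket inside Postfix's chroot
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    user = postfix
    group = postfix
    mode = 0600
  }
}
```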
Other examples
The Dovecot sieve examples are numerous and also quite useful. At
the time of writing, they include virus scanning and spam filtering,
vacation auto-replies, includes, archival, and flags.
Harmful considered harmful
I am aware that the "considered harmful" title has a long and
controversial history, being considered harmful in itself (by some
people who are obviously not afraid of contradictions).
I have nevertheless deliberately chosen that title, partly to make
sure this article gets maximum visibility, but more specifically
because I have no doubt that procmail is, at this moment in history,
clearly a bad idea.
Developing story
I must also add that, incredibly, this story has changed while writing
it. This article is derived from this bug I filed in Debian to,
quite frankly, kick procmail out of Debian. But filing the bug had the
interesting effect of pushing the upstream into action: as mentioned
above, they have apparently made a new release and merged a bunch of
patches in a new git repository.
This doesn't change much of the above, at this moment. If anything
significant comes out of this effort, I will try to update this
article to reflect the situation. I am actually happy to retract the
claims in this article if it turns out that procmail is a stellar
example of defensive programming and survives fuzzing attacks. But at
this moment, I'm pretty confident that will not happen, at least not
in scope of the next Debian release cycle.
In the past I used to run a number of virtual machines, or dedicated hosts. Currently I've cut things down to only a single machine, which I'm planning to remove.
Email
Email used to be hosted via dovecot, and then read with mutt-ng on the host itself. Later I moved to reading mail with my own console-based email client.
Eventually I succumbed, and now I pay for Google's Workspace product.
Git Repositories
I used to use gitbucket for hosting a bunch of (mostly private) git repositories. A bad shutdown/reboot of my host trashed the internal database so that was broken.
I replaced the use of gitbucket, which was very pretty, with gitolite to perform access-control and avoid the need for a binary database.
I merged a bunch of repositories, removed the secret things from there where possible, and finally threw them on a second github account. GPG-encryption added where appropriate.
Static Hosts
Static websites I used to host upon my own machine are now hosted via netlify.
There aren't many of them, and they are rarely updated; I guess I care less.
Dynamic Hosts
That leaves only dynamic hosts. I used to have a couple of these, most notably debian-administration.org, but that was archived, and the final commercial thing I did was retired in January.
I now have only one dynamic site up and running, https://api.steve.fi/, this provides two dynamic endpoints:
One to return data about trams coming to the stop near my house.
One to return the current temperature.
Both of these are used by my tram-display device. Running these two services locally, in Docker, would probably be fine.
However there is a third "secret" API - blog-comment submission.
When a comment is received upon this blog it is written to a local filesystem, and an email is sent to me. The next time my blog is built, rsync is used to fetch the remote comments and add them to the blog. (Spam deleted first, of course.)
Locally the comments are added into the git-repository this blog is built from - and the remote files deleted now and again.
Maybe I should just switch from writing the blog-comment to disk, and include all the meta-data in the email? I don't wanna go connecting to Gmail via IMAP, but I could probably copy and paste from the email to my local blog-repository.
I can stop hosting the tram-APIs publicly, but the blog comment part is harder. I guess I just need to receive incoming FORM-submission, and send an email.
Maybe I host the existing container on fly.io, for free?
Maybe I write an AWS lambda function to do the necessary thing?
Or maybe I drop blog-comments and sidestep the problem entirely? After all I wrote five posts in the whole of last year ..
So in my previous post I mentioned that we were going to spend the Christmas period in the UK, which we did.
We spent a couple of days there, meeting my parents, and family. We also persuaded my sister to drive us to Scarborough so that we could hang out on the beach for an afternoon.
Finland has lots of lakes, but it doesn't have proper waves. So it was surprisingly good just to wade in the sea and see waves! Unfortunately our child was a wee bit too scared to ride on a donkey!
Unfortunately upon our return to Finland we all tested positive for COVID-19, me first, then the child, and about three days later my wife. We had negative tests in advance of our flights home, so we figure that either the tests were broken, or we were infected in the airplane/airport.
Thankfully things weren't too bad, we stayed indoors for the appropriate length of time, and a combination of a couple of neighbours and online shopping meant we didn't run out of food.
Since I've been back home I've been automating AWS activities with aws-utils, and updating my simple host-automation system, marionette.
Marionette is something that was inspired by puppet, the configuration management utility, but it runs upon localhost only. Despite the small number of integrated primitives it actually works surprisingly well, and although I don't expect it will ever become popular it was an interesting research project.
The aws-utilities? They were specifically put together because I've worked in a few places where infrastructure is set up with terraform or cloudformation, but there is always the odd thing that is configured manually. Typically we'll have an openvpn gateway which uses a manually maintained IP allow-list, or some admin-server which has a security-group maintained somewhat manually.
Having the ability to update a bunch of rules with your external IP, as a single command, across a number of AWS accounts/roles, and a number of security-groups is an enormous time-saver when your home IP changes.
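As a rough single-group sketch of the idea with the stock AWS CLI
(the security-group ID here is a placeholder, and this is not the
actual aws-utils code, which handles multiple accounts, roles, and
groups):

```shell
# Fetch the current external IP, then allow SSH from it in a
# security group. sg-0123456789abcdef0 is a hypothetical group ID.
MYIP="$(curl -s https://checkip.amazonaws.com)"
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr "${MYIP}/32"
```

The tedious part aws-utils removes is repeating this (and revoking
the stale entries) across every account and group when the home IP
changes.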
I'd quite like to add more things to that collection, but there's no particular rush.
I realize it has been quite some time since I last made a blog-post, so I guess the short version is "I'm still alive", or as Granny Weatherwax would have said:
I ATE'NT DEAD
Of course if I die now this would be an awkward post!
I can't think of anything terribly interesting I've been doing recently, mostly being settled in my new flat and tinkering away with things. The latest "new" code was something for controlling mpd via a web-browser:
This is a simple HTTP server which allows you to minimally control mpd running on localhost:6600. (By minimally I mean literally "stop", "play", "next track", and "previous track").
I have all my music stored on my desktop, I use mpd to play it locally through a pair of speakers plugged into that computer. Sometimes I want music in the sauna, or in the bedroom. So I have a couple of bluetooth speakers which are used to send the output to another room. When I want to skip tracks I just open the mpd-web site on my phone and tap the button. (I did look at Android mpd-clients, but installing an application for this seemed a bit of overkill.)
I guess I've not been doing so much "computer stuff" outside work for a year or so. I guess lack of time, lack of enthusiasm/motivation.
So looking forward to things? I'll be in the UK for a while over Christmas, barring surprises. That should be nice as I'll get to see family, take our child to visit his grandparents (on his birthday no less), and enjoy playing the "How many Finnish people can I spot in the UK?" game.
I grew up with the Internet and its ethics and politics have always
been important in my life. But I have also been involved at other
levels, against police brutality, for Food, Not Bombs,
worker autonomy, software freedom, etc. For a long time,
that all seemed coherent.
But the more I look at the modern Internet -- and the
mega-corporations that control it -- the less confidence I have in
my original political analysis of the liberating potential of
technology. I have come to believe that most of our technological
development is harmful to the large majority of the population of the
planet, and of course the rest of the biosphere. And now I feel this
is not a new problem.
This is because the Internet is a neo-colonial device, and has been
from the start. Let me explain.
What is Neo-Colonialism?
The term "neo-colonialism" was coined by Kwame Nkrumah,
first president of Ghana. In Neo-Colonialism, the Last Stage of
Imperialism (1965), he wrote:
In place of colonialism, as the main instrument of imperialism, we
have today neo-colonialism ... [which] like colonialism, is an
attempt to export the social conflicts of the capitalist
countries. ...
The result of neo-colonialism is that foreign capital is used for
the exploitation rather than for the development of the less
developed parts of the world. Investment, under neo-colonialism,
increases, rather than decreases, the gap between the rich and the
poor countries of the world.
So basically, if colonialism is Europeans bringing genocide, war,
and their religion to Africa, Asia, and the Americas,
neo-colonialism is the Americans (note the "n") bringing capitalism to
the world.
Before we see how this applies to the Internet, we must therefore make
a detour into US history. This matters, because anyone would be
hard-pressed to decouple neo-colonialism from the empire under which
it evolves, and here we can only name the United States of America.
US Declaration of Independence
Let's start with the United States declaration of independence
(1776). Many Americans may roll their eyes at this, possibly because
that declaration is not actually part of the US constitution and
therefore may have questionable legal standing. Still, it was
obviously a driving philosophical force in the founding of the
nation. As its author, Thomas Jefferson, stated:
it was intended to be an expression of the American mind, and to
give to that expression the proper tone and spirit called for by the
occasion
In that aging document, we find the following pearl:
We hold these truths to be self-evident, that all men are created
equal, that they are endowed by their Creator with certain
unalienable Rights, that among these are Life, Liberty and the
pursuit of Happiness.
As a founding document, the Declaration still has an impact in the
sense that the above quote has been called an:
"immortal declaration", and "perhaps [the] single phrase" of the
American Revolutionary period with the greatest "continuing
importance." (Wikipedia)
Let's read that "immortal declaration" again: "all men are created
equal". "Men", in that context, is limited to a certain number of
people, namely "property-owning or tax-paying white males, or about
6% of the population". Back when this was written, women didn't
have the right to vote, and slavery was legal. Jefferson himself owned
hundreds of slaves.
The declaration was aimed at the King and was a list of
grievances. A concern of the colonists was that the King:
has excited domestic insurrections amongst us, and has endeavoured
to bring on the inhabitants of our frontiers, the merciless Indian
Savages whose known rule of warfare, is an undistinguished
destruction of all ages, sexes and conditions.
This is a clear mark of the frontier myth which paved the way for
the US to exterminate and colonize the territory some now call the
United States of America.
The declaration of independence is obviously a colonial document,
having been written by colonists. None of this is particularly
surprising, historically, but I figured it serves as a good reminder
of where the Internet is coming from, since it was born in the US.
A Declaration of the Independence of Cyberspace
Two hundred and twenty years later, in 1996, John Perry Barlow
wrote a declaration of independence of cyberspace. At this
point, (almost) everyone has a right to vote (including women),
slavery was abolished (although some argue it still exists in the
form of the prison system); the US has made tremendous
progress. Surely this text will have aged better than the previous
declaration it is obviously derived from. Let's see how it reads today
and how it maps to how the Internet is actually built now.
Borders of Independence
One of the key ideas that Barlow brings up is that "cyberspace does
not lie within your borders". In that sense, cyberspace is the final
frontier: having failed to colonize the moon, Americans turn
inwards, deeper into technology, but still in the frontier
ideology. And indeed, Barlow is one of the co-founders of the
Electronic Frontier Foundation (the beloved EFF), founded six
years prior.
But there are other problems with this idea. As Wikipedia quotes:
The declaration has been criticized for internal
inconsistencies.[9] The declaration's assertion that
'cyberspace' is a place removed from the physical world has also
been challenged by people who point to the fact that the Internet is
always linked to its underlying geography.[10]
And indeed, the Internet is definitely a physical object. First
controlled and severely restricted by "telcos" like AT&T, it was
somewhat "liberated" from that monopoly in 1982 when an anti-trust
lawsuit broke up the monopoly, a key historical event that,
one could argue, made the Internet possible.
(From there on, "backbone" providers could start competing and emerge,
and eventually coalesce into new monopolies: Google has a monopoly on
search and advertisement, Facebook on communications for a few
generations, Amazon on storage and computing, Microsoft on hardware,
etc. Even AT&T is now pretty much as consolidated as it was
before.)
The point is: all those companies have gigantic data centers and
intercontinental cables. And those are definitely prioritizing the
western world, the heart of the empire. Take for example Google's
latest 3,900 mile undersea cable: it does not connect Argentina to
South Africa or New Zealand, it connects the US to UK and
Spain. Hardly a revolutionary prospect.
Private Internet
But back to the Declaration:
Do not think that you can build it, as though it were a public
construction project. You cannot. It is an act of nature and it
grows itself through our collective actions.
In Barlow's mind, the "public" is bad, and private is good,
natural. Or, in other words, a "public construction project" is
unnatural. And indeed, the modern "nature" of development is private:
most of the Internet is now privately owned and operated.
I must admit that, as an anarchist, I loved that sentence when I read
it. I was rooting for "us", the underdogs, the revolutionaries. And,
in a way, I still do: I am on the board of Koumbit and work for a
non-profit that has pivoted towards censorship and surveillance
evasion. Yet I cannot help but think that, as a whole, we have failed
to establish that independence and put too much trust in private
companies. It is obvious in retrospect, but it was not, 30 years
ago.
Now, the infrastructure of the Internet has zero accountability to
traditional political entities supposedly representing the people, or
even its users. The situation is actually worse than when the US was
founded (e.g. "6% of the population can vote"), because the owners of the
tech giants are only a handful of people who can override any
decision. There's only one Amazon CEO, he's called Jeff Bezos, and he
has total control. (Update: Bezos actually ceded the CEO role to Andy
Jassy, AWS and Amazon music founder, while remaining executive
chairman. I would argue that, as the founder and the richest man
on earth, he still has strong control over Amazon.)
Social Contract
Here's another claim of the Declaration:
We are forming our own Social Contract.
I remember the early days, back when "netiquette" was a word: it
did feel like we had some sort of a contract. Not written in standards of
course -- or barely (see RFC1855) -- but as a tacit
agreement. How wrong we were. One just needs to look at Facebook to
see how problematic that idea is on a global network.
Facebook is the quintessential "hacker" ideology put in practice. Mark
Zuckerberg explicitly refused to be "arbiter of truth" which
implicitly means he will let lies take over its platforms.
He also sees Facebook as a place where everyone is equal, something
that echoes the Declaration:
We are creating a world that all may enter without privilege or
prejudice accorded by race, economic power, military force, or
station of birth.
(We note, in passing, the omission of gender in that list, also
mirroring the infamous "All men are created equal" claim of the US
declaration.)
As the Wall Street Journal's (WSJ) Facebook files later showed,
both of those "contracts" have serious limitations inside Facebook. There are
VIPs who systematically bypass moderation systems including
fascists and rapists. Drug cartels and human traffickers
thrive on the platform. Even when Zuckerberg himself tried to
tame the platform -- to get people vaccinated or to make it
healthier -- he failed: "vaxxer" conspiracies multiplied and
Facebook got angrier.
This is because the "social contract" behind Facebook and those large
companies is a lie: their concern is profit and that means
advertising, "engagement" with the platform, which causes increased
anxiety and depression in teens, for example.
Facebook's response to this is that they are working really hard on
moderation. But the truth is that even that system is severely
skewed. The WSJ showed that Facebook has translators for only 50
languages. It is surprisingly hard to count human languages, but
estimates of the number of distinct languages range between 2,500
and 7,000. So while 50 languages seems big at first, it's actually a
tiny fraction of the human population using Facebook. Taking the first
50 of the Wikipedia list of languages by native speakers, we omit
languages like Dutch (52), Greek (74), and Hungarian (78), and those
are just a few random picks from Europe.
As an example, Facebook has trouble moderating even a major language
like Arabic. It censored content from legitimate Arab news sources
when they mentioned the word al-Aqsa, because Facebook associates
it with the al-Aqsa Martyrs' Brigades even when the sources were
talking about the Al-Aqsa Mosque... This bias against Arabs also shows
how Facebook reproduces the American colonizer politics.
The WSJ also pointed out that Facebook spends only 13% of its
moderation efforts outside of the US, even though that represents 90% of
its users. Facebook spends three times more on moderating "brand
safety", which shows its priority is not the safety of its users, but
of the advertisers.
Military Internet
Sergey Brin and Larry Page are the Lewis and Clark of
our generation. Just like the latter were sent by Jefferson (the same)
to declare sovereignty over the entire US west coast, Google declared
sovereignty over all human knowledge, with its mission statement "to
organize the world's information and make it universally accessible
and useful". (It should be noted that Page somewhat questioned that
mission but only because it was not ambitious enough, Google
having "outgrown" it.)
The Lewis and Clark expedition, just like Google, had a scientific
pretext, because that is what you do to colonize a world,
presumably. Yet both men were military and had to receive scientific
training before they left. The Corps of Discovery was made up of
a few dozen enlisted men and a dozen civilians, including York, an
African-American slave owned by Clark and sold after the
expedition, his final fate lost to history.
And just like Lewis and Clark, Google has a strong military
component. For example, Google Earth was not originally built at
Google but is the acquisition of a company called Keyhole which had
ties with the CIA. Those ties were brought inside Google during
the acquisition. Google's increasing investment inside the
military-industrial complex eventually led Google to workers
organizing a revolt, although it is currently unclear to me how
much Google is involved in the military apparatus. Other companies,
obviously, do not have such reservations, with Microsoft, Amazon, and
plenty of others happily bidding on military contracts all the time.
Spreading the Internet
I am obviously not the first to identify colonial structures in the
Internet. In an article titled The Internet as an Extension of
Colonialism, Heather McDonald correctly identifies fundamental
problems with the "development" of new "markets" of Internet
"consumers", primarily arguing that it creates a digital divide
which creates a "lack of agency and individual freedom":
Many African people have gained access to these technologies but not
the freedom to develop content such as web pages or social media
platforms in their own way. Digital natives have much more power and
therefore use this to create their own space with their own norms,
shaping their online world according to their own outlook.
But the digital divide is certainly not the worst problem we have to
deal with on the Internet today. Going back to the Declaration, we
originally believed we were creating an entirely new world:
This governance will arise according to the conditions of our
world, not yours. Our world is different.
How I dearly wished that was true. Unfortunately, the Internet is
not that different from the offline world. Or, to be more accurate,
the values we have embedded in the Internet, particularly of free
speech absolutism, sexism, corporatism, and exploitation, are now
exploding outside of the Internet, into the "real" world.
The Internet was built with free software which, fundamentally, was
based on quasi-volunteer labour of an elite force of white men with
obviously too much time on their hands (and also: no children). The
mythical writing of GCC and Emacs by Richard Stallman is a good
example of this, but the entirety of the Internet now seems to be
running on random bits and pieces built by hit-and-run programmers
working on their copious free time. Whenever any of those fails,
it can compromise or bring down entire systems. (Heck, I wrote
this article on my day off...)
This model of what is fundamentally "cheap labour" is spreading out
from the Internet. Delivery workers are being exploited to the bone by
apps like Uber -- although it should be noted that workers organise
and fight back. Amazon workers are similarly exploited beyond
belief, forbidden to take breaks until they pee in bottles, with
ambulances nearby to carry out the bodies. During the peak of the
pandemic, workers were dangerously exposed to the virus in
warehouses. All this while Amazon is basically taking over the entire
economy.
The Declaration culminates with this prophecy:
We will spread ourselves across the Planet so that no one can arrest
our thoughts.
This prediction, which first felt revolutionary, is now chilling.
Colonial Internet
The Internet is, if not neo-colonial, plain colonial. The US colonies
had cotton fields and slaves, we have disposable cell phones and
Foxconn workers. Canada has its cultural genocide, Facebook
has its own genocides in Ethiopia, Myanmar, and mob violence
in India. Apple is at least implicitly accepting the Uyghur
genocide. And just like the slaves of the colony, those atrocities
are what makes the empire run.
The Declaration actually ends like this, a quote which I have in my
fortune cookies file:
We will create a civilization of the Mind in Cyberspace. May it be
more humane and fair than the world your governments have made
before.
That is still inspiring to me. But if we want to make "cyberspace"
more humane, we need to decolonize it. Work on cyberpeace instead of
cyberwar. Establish clear codes of conduct, discuss ethics, and
question your own privileges, biases, and culture. For me the first
step in decolonizing my own mind is writing this article. Breaking
up tech monopolies might be an important step, but it won't be
enough: we have to do a culture shift as well, and that's the hard
part.
Appendix: an apology to Barlow
I kind of feel bad going through Barlow's declaration like this, point
by point. It is somewhat unfair, especially since Barlow passed away a
few years ago and cannot mount a response (even humbly assuming that
he might read this). But then again, he himself recognized he was
a bit too "optimistic" in 2009, saying: "we all get older and
smarter":
I'm an optimist. In order to be libertarian, you have to be an
optimist. You have to have a benign view of human nature, to believe
that human beings left to their own devices are basically good. But
I'm not so sure about human institutions, and I think the real point
of argument here is whether or not large corporations are human
institutions or some other entity we need to be thinking about
curtailing. Most libertarians are worried about government but not
worried about business. I think we need to be worrying about
business in exactly the same way we are worrying about government.
And, in a sense, it was a little naive to expect Barlow to not be a
colonist. Barlow is, among many things, a cattle rancher who grew up
on a colonial ranch in Wyoming. The ranch was founded in 1907 by his
great uncle, 17 years after the state joined the Union, and only a
generation or two after the Powder River War (1866-1868) and
Black Hills War (1876-1877) during which the US took over lands
occupied by Lakota, Cheyenne, Arapaho, and other native American
nations, in some of the last major First Nations Wars.
Appendix: further reading
There is another article that almost has the same title as this one:
Facebook and the New Colonialism. (Interestingly, the <title>
tag on the article is actually "Facebook the Colonial Empire" which I
also find appropriate.) The article is worth reading in full, but I
loved this quote so much that I couldn't resist reproducing it here:
Representations of colonialism have long been present in digital
landscapes. ("Even Super Mario Brothers," the video game designer
Steven Fox told me last year. "You run through the landscape, stomp
on everything, and raise your flag at the end.") But web-based
colonialism is not an abstraction. The online forces that shape a
new kind of imperialism go beyond Facebook.
It goes on:
Consider, for example, digitization projects that focus primarily on
English-language literature. If the web is meant to be humanity's new
Library of Alexandria, a living repository for all of humanity's
knowledge, this is a problem. So is the fact that the vast majority of
Wikipedia pages are about a relatively tiny square of the planet. For
instance, 14 percent of the world's population lives in Africa, but
less than 3 percent of the world's geotagged Wikipedia articles
originate there, according to a 2014 Oxford Internet Institute
report.
And they introduce another definition of Neo-colonialism, while
warning about abusing the word like I am sort of doing here:
"I'm loath to toss around words like colonialism but it's hard to
ignore the family resemblances and recognizable DNA, to wit," said
Deepika Bahri, an English professor at Emory University who focuses
on postcolonial studies. In an email, Bahri summed up those
similarities in list form:
ride in like the savior
bandy about words like equality, democracy, basic rights
mask the long-term profit motive (see 2 above)
justify the logic of partial dissemination as better than nothing
partner with local elites and vested interests
accuse the critics of ingratitude
In the end, she told me, "if it isn't a duck, it shouldn't quack
like a duck."
Another good read is the classic Code and other laws of
cyberspace (1999, free PDF) which is also critical of
Barlow's Declaration. In "Code is law", Lawrence Lessig argues that:
computer code (or "West Coast Code", referring to Silicon Valley)
regulates conduct in much the same way that legal code (or "East
Coast Code", referring to Washington, D.C.) does (Wikipedia)
And now it feels like the west coast has won over the east coast, or
maybe it recolonized it. In any case, the Internet now christens
emperors.
So, since I registered the URL for serving the unofficial Debian
images for Raspberry Pi computers, raspi.debian.net, in April 2020,
I had been hosting it in my Dreamhost webspace.
Over two years ago (yes, before I finished setting it up in Dreamhost),
Steve McIntyre approached me and invited me to host the images under
the Debian cdimages user group. I told him I'd first just get the
setup running, and later I would approach him to finalize the
setup.
Then, I set up the build on my own server, hosted on my Dreamhost
account, and forgot about it for many months. Last month, there was
a not particularly happy flamewar in
debian-arm@lists.debian.org
that finished with me stating I would be moving the hosting to Debian
infrastructure
soon.
Well, it took me a bit over a month to get this sorted out, together
with several days of half-broken links, but it is finally done:
raspi.debian.net is a CNAME for
ftp.acc.umu.se, which is the same system that hosts
cdimage.debian.org.
And, of course, it is also reachable as
https://cdimage.debian.org/cdimage/unofficial/raspi/,
which looks more official, but is less memorable.
Thanks a lot to Steve for the nudging, and to maswan for helping
finalize the setup.
What next? Well, the images are being built on my server. I'd love to
move the builder over to Debian machines as well. When? How? That's
still in the air.